
    Expansion of the effective action around non-Gaussian theories

    This paper derives the Feynman rules for the diagrammatic perturbation expansion of the effective action around an arbitrary solvable problem. The perturbation expansion around a Gaussian theory is well known and composed of one-line-irreducible diagrams only. For the expansion around an arbitrary, non-Gaussian problem, we show that a more general class of irreducible diagrams remains, in addition to a second set of diagrams that has no analogue in the Gaussian case. The effective action is central to field theory, in particular to the study of phase transitions, symmetry breaking, effective equations of motion, and renormalization. We exemplify the method on the Ising model, where the effective action amounts to the Gibbs free energy, recovering the Thouless-Anderson-Palmer mean-field theory in a fully diagrammatic derivation. Higher-order corrections follow with only minimal effort compared to existing techniques. Our results further show that the Plefka expansion and the high-temperature expansion are special cases of the general formalism presented here.
    Comment: 37 pages, published version
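
    As a concrete illustration of the Thouless-Anderson-Palmer mean-field theory recovered above, the following minimal sketch iterates the standard TAP self-consistency equations m_i = tanh(beta*h_i + beta*sum_j J_ij m_j - beta^2 m_i sum_j J_ij^2 (1 - m_j^2)) for a small Ising system. The SK-type coupling statistics, field strength, and damping are illustrative assumptions, not parameters from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        N, beta = 50, 0.5
        # symmetric Gaussian couplings with variance 1/N (SK-type, illustrative choice)
        J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
        J = 0.5 * (J + J.T)
        np.fill_diagonal(J, 0.0)
        h = rng.normal(0.0, 0.1, N)              # weak random fields

        m = np.zeros(N)                          # magnetizations
        for _ in range(500):                     # damped fixed-point iteration
            onsager = beta * m * (J**2 @ (1.0 - m**2))   # Onsager reaction term
            m_new = np.tanh(beta * (h + J @ m - onsager))
            m = 0.5 * m + 0.5 * m_new

        print("mean |m_i| =", np.abs(m).mean())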

    Locking of correlated neural activity to ongoing oscillations

    Population-wide oscillations are ubiquitously observed in mesoscopic signals of cortical activity. In these network states a global oscillatory cycle modulates the propensity of neurons to fire. Synchronous activation of neurons has been hypothesized to constitute a separate channel of information processing in the brain. A salient question is therefore whether and how oscillations interact with spike synchrony, and to what extent these channels can be considered separate. Experiments indeed showed that correlated spiking co-modulates with the static firing rate and is also tightly locked to the phase of beta oscillations. While the dependence of correlations on the mean rate is well understood in feed-forward networks, it remains unclear why and by which mechanisms correlations tightly lock to an oscillatory cycle. Here we demonstrate that such correlated activation of pairs of neurons is qualitatively explained by periodically driven random networks. We identify the mechanisms by which covariances depend on a driving periodic stimulus. Mean-field theory combined with linear response theory yields closed-form expressions for the cyclostationary mean activities and pairwise zero-time-lag covariances of binary recurrent random networks. Two distinct mechanisms cause time-dependent covariances: the modulation of the susceptibility of single neurons (via the external input and network feedback) and the time-varying variances of single-unit activities. For some parameters, the effectively inhibitory recurrent feedback leads to resonant covariances even if mean activities show non-resonant behavior. Our analytical results open the question of time-modulated synchronous activity to quantitative analysis.
    Comment: 57 pages, 12 figures, published version
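
    The mean-field part of such an analysis can be sketched in a few lines: a population-averaged activity m(t) evolves under effectively inhibitory recurrent feedback plus a sinusoidal drive and, after a transient, settles into a cyclostationary orbit locked to the drive. The gain function, feedback strength J0, drive amplitude, and time constants below are illustrative assumptions rather than values from the paper.

        import numpy as np
        from scipy.special import erf

        def gain(mu, sigma=1.0):
            # mean activity of a binary unit receiving Gaussian input fluctuations
            return 0.5 * (1.0 + erf(mu / (np.sqrt(2.0) * sigma)))

        tau, J0 = 10.0, 20.0          # time constant (ms), inhibitory feedback strength
        A, f = 0.5, 0.01              # drive amplitude and frequency (1/ms, i.e. 10 Hz)
        dt, T = 0.1, 1000.0
        m, trace = 0.05, []
        for step in range(int(T / dt)):
            t = step * dt
            mu = -J0 * m + A * np.sin(2.0 * np.pi * f * t)   # recurrent input + drive
            m += dt / tau * (-m + gain(mu))
            trace.append(m)
        # after a transient, m(t) is cyclostationary: locked to the 10 Hz drive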

    The perfect integrator driven by Poisson input and its approximation in the diffusion limit

    In this note we consider the perfect integrator driven by Poisson process input. We derive its equilibrium and response properties and contrast them with those obtained from the diffusion approximation. In particular, the probability density in the vicinity of the threshold differs, which leads to altered response properties of the system in equilibrium.
    Comment: 7 pages, 3 figures, v2: corrected authors in references
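
    A minimal Monte-Carlo sketch of the kind of comparison made in this note: the same perfect integrator is driven once by actual Poisson spikes of finite weight and once by matched Gaussian (diffusion-limit) increments; the firing rates agree, while the occupancy just below threshold differs. All parameters are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        lam, w, theta = 5.0, 0.1, 1.0      # input rate (1/ms), synaptic weight, threshold
        dt, n = 0.01, 200000               # time step (ms) and number of steps

        def simulate(increments):
            # integrate, reset by threshold subtraction, histogram V in [0, theta)
            V, spikes, hist = 0.0, 0, np.zeros(20)
            for dV in increments:
                V += dV
                if V >= theta:
                    V -= theta
                    spikes += 1
                if 0.0 <= V < theta:
                    hist[int(V / theta * 20)] += 1
            return spikes / (n * dt), hist / hist.sum()

        poisson_inc = w * rng.poisson(lam * dt, n)        # exact point-process input
        gauss_inc = lam * w * dt + w * np.sqrt(lam * dt) * rng.normal(size=n)

        rate_p, hist_p = simulate(poisson_inc)
        rate_g, hist_g = simulate(gauss_inc)
        print(rate_p, rate_g)              # both close to lam * w / theta = 0.5 per ms
        print(hist_p[-1], hist_g[-1])      # occupancy just below threshold differs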

    Momentum-dependence in the infinitesimal Wilsonian renormalization group

    Wilson's original formulation of the renormalization group is perturbative in nature. We here present an alternative derivation of the infinitesimal momentum-shell RG, akin to the Wegner and Houghton scheme, that is a priori exact. We show that the momentum-dependence of vertices is key to obtaining a diagrammatic framework that has the same one-loop structure as the vertex expansion of the Wetterich equation. Momentum dependence leads to a delayed functional differential equation in the cutoff parameter. Approximations are then made at two points: truncation of the vertex expansion and approximating the functional form of the momentum dependence by a momentum-scale expansion. We exemplify the method on the scalar φ⁴-theory, computing analytically the Wilson-Fisher fixed point, its anomalous dimension η(d), and the critical exponent ν(d) non-perturbatively in d ∈ [3,4] dimensions. The results are in reasonable agreement with the known values, despite the simplicity of the method.
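
    For orientation only: the leading-order epsilon expansion provides the standard benchmark such a computation is checked against. For the one-component (Ising) φ⁴ theory in d = 4 - ε, one loop gives ν = 1/2 + ε/12 + O(ε²), while η starts at order ε². This is textbook material, not the paper's scheme.

        # one-loop epsilon expansion for the Ising (n = 1) universality class,
        # with eps = 4 - d:  nu = 1/2 + eps/12 + O(eps^2),  eta = O(eps^2)
        for d in (4.0, 3.75, 3.5, 3.25, 3.0):
            eps = 4.0 - d
            print(f"d = {d:.2f}:  nu ~ {0.5 + eps / 12.0:.3f}")
        # at d = 3 this yields nu ~ 0.583, versus the accepted value of about 0.630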

    A reaction-diffusion-like formalism for plastic neural networks reveals dissipative solitons at criticality

    Self-organized structures in networks with spike-timing-dependent plasticity (STDP) are likely to play a central role for information processing in the brain. In the present study we derive a reaction-diffusion-like formalism for plastic feed-forward networks of nonlinear rate neurons with a correlation-sensitive learning rule that is inspired by, and qualitatively similar to, STDP. After obtaining equations that describe the change of the spatial shape of the signal from layer to layer, we derive a criterion for the non-linearity necessary to obtain stable dynamics for arbitrary input. We classify the possible scenarios of signal evolution and find that close to the transition to the unstable regime metastable solutions appear. The form of these dissipative solitons is determined analytically, and the evolution and interaction of several such coexistent objects are investigated.
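
    A generic sketch of the layer-to-layer picture used here: a spatial activity profile is mapped from one layer to the next by a Gaussian (diffusion-like) coupling kernel followed by a saturating non-linearity. Kernel width, gain, and the input profile are illustrative assumptions; the plasticity dynamics of the paper are not modeled.

        import numpy as np

        x = np.linspace(-50.0, 50.0, 512)
        dx = x[1] - x[0]
        kernel = np.exp(-x**2 / (2.0 * 4.0**2))
        kernel /= kernel.sum() * dx            # normalized spatial coupling profile

        def next_layer(s, gain=1.2):
            # diffusion-like spatial coupling followed by a saturating non-linearity
            drive = dx * np.convolve(s, kernel, mode="same")
            return np.tanh(gain * drive)

        s = np.exp(-x**2 / 8.0)                # localized input to the first layer
        for _ in range(30):
            s = next_layer(s)
        # depending on the gain, the profile decays, saturates, or forms a bump
        print(s.max(), (s > 0.5).sum() * dx)   # amplitude and width after 30 layers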

    A unified view on weakly correlated recurrent networks

    The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances raises the question of how these models relate to each other. In particular, it is hard to distinguish between generic properties and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire model, and the Hawkes process. We show that linear approximation maps each of these models to one of two classes of linear rate models, including the Ornstein-Uhlenbeck process as a special case. The classes differ in the location of additive noise in the rate dynamics, which is on the output side for spiking models and on the input side for the binary model. Both classes allow closed-form solutions for the covariance. For output noise, the covariance separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the presence of conduction delays and simplify derivations for established results. Our approach is applicable to general network structures and suitable for population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking fluctuations into account in the linearization procedure increases the accuracy of the effective theory, and we explain the class-dependent differences between covariances in the time and the frequency domain. Finally, we show that the oscillatory instability emerging in networks of integrate-and-fire models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra.
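
    For linearized rate dynamics dx/dt = (-1 + W)x + xi with white input noise of covariance D, the stationary covariance C follows from the Lyapunov equation A C + C A^T + D = 0 with A = -1 + W, which is one standard route to closed-form covariances of the kind discussed above. A minimal sketch with an arbitrary stable random connectivity (all parameters illustrative):

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        rng = np.random.default_rng(2)
        N = 100
        W = rng.normal(0.0, 0.5 / np.sqrt(N), (N, N))  # random connectivity, stable
        A = -np.eye(N) + W                             # Jacobian of dx/dt = -x + W x
        D = np.eye(N)                                  # white input-noise covariance

        C = solve_continuous_lyapunov(A, -D)           # solves A C + C A^T = -D
        print(C[0, 0], C[0, 1])                        # a variance and a covariance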

    Beyond-mean-field theory for the statistics of neural coordination

    Understanding the coordination structure of neurons in neuronal networks is essential for unraveling the distributed information processing mechanisms in brain networks. Recent advancements in measurement techniques have resulted in an increasing amount of data on neural activities recorded in parallel, revealing largely heterogeneous correlation patterns across neurons. Yet the mechanistic origin of this heterogeneity is largely unknown, because existing theoretical approaches linking structure and dynamics in neural circuits are mostly restricted to average connection patterns. Here we present a systematic inclusion of variability in network connectivity via tools from the statistical physics of disordered systems. We study networks of spiking leaky integrate-and-fire neurons and employ mean-field and linear-response methods to map the spiking networks to linear rate models with an equivalent neuron-resolved correlation structure. The latter models can be formulated in a field-theoretic language that allows using disorder-average and replica techniques to systematically derive quantitatively matching beyond-mean-field predictions for the mean and variance of cross-covariances as functions of the average and variability of connection patterns. We show that heterogeneity in covariances is not a result of variability in single-neuron firing statistics but stems from the sparse realization and variable strength of connections, as ubiquitously observed in brain networks. Average correlations between neurons are found to be insensitive to the level of heterogeneity, which in contrast modulates the variability of covariances across many orders of magnitude, giving rise to an efficient tuning of the complexity of coordination patterns in neuronal circuits.
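
    The flavor of these disorder-averaged predictions can be checked numerically: sample sparse random connectivities, compute neuron-resolved integrated covariances from the linear-response propagator (1 - W)^(-1), and collect the mean and variance of the cross-covariances across realizations. Network size, sparseness, and weight below are illustrative assumptions, not the paper's values.

        import numpy as np

        rng = np.random.default_rng(3)
        N, p, w = 200, 0.1, 0.02         # neurons, connection probability, weight
        cross = []
        for _ in range(20):              # disorder realizations
            W = w * (rng.random((N, N)) < p)        # sparse Erdos-Renyi connectivity
            np.fill_diagonal(W, 0.0)
            B = np.linalg.inv(np.eye(N) - W)        # linear-response propagator
            C = B @ B.T                             # integrated covariances, unit noise
            cross.append(C[~np.eye(N, dtype=bool)]) # keep cross-covariances only
        cross = np.concatenate(cross)
        print("mean:", cross.mean(), "variance:", cross.var())
        # the mean reflects the average connectivity; the variance, its sparse realization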

    Equilibrium and Response Properties of the Integrate-and-Fire Neuron in Discrete Time

    The integrate-and-fire neuron with exponential postsynaptic potentials is a frequently employed model to study neural networks. Simulations in discrete time still offer the highest performance at moderate numerical errors, which makes them the first choice for long-term simulations of plastic networks. Here we extend the population density approach to investigate how the equilibrium and response properties of the leaky integrate-and-fire neuron are affected by time discretization. We present a novel analytical treatment of the boundary condition at threshold, taking both the discretization of time and finite synaptic weights into account. We uncover an increased membrane-potential density just below threshold as the decisive property that explains the deviations found between simulations and the classical diffusion approximation. Temporal discretization and finite synaptic weights both contribute to this effect. Our treatment improves the standard formula for calculating the neuron's equilibrium firing rate. Direct solution of the Markov process describing the evolution of the membrane-potential density confirms our analysis and yields a method to calculate the firing rate exactly. Knowing the shape of the membrane-potential distribution near threshold enables us to derive the transient response properties of the neuron model to synaptic input. We find a pronounced non-linear fast response component that has not been described by the prevailing continuous-time theory for Gaussian white-noise input.
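
    A minimal sketch of such a direct Markov-chain solution, for a simplified discrete-time leaky integrator with jump-like synapses and Poisson input: the membrane-potential density is propagated on a grid, and the firing rate is read off as the probability mass crossing threshold per step. Grid resolution, parameters, and the reset-to-zero convention are illustrative assumptions, not the paper's exact model.

        import numpy as np
        from scipy.stats import poisson

        dt, tau = 0.1, 10.0                # time step and membrane time constant (ms)
        gamma = np.exp(-dt / tau)          # leak factor per step
        w, rate, theta = 0.1, 2.0, 1.0     # synaptic weight, input rate (1/ms), threshold
        pk = poisson.pmf(np.arange(31), rate * dt)   # spike-count distribution per step

        V = np.linspace(0.0, theta, 200, endpoint=False)
        T = np.zeros((len(V), len(V)))     # transition matrix of the membrane potential
        fire = np.zeros(len(V))            # threshold-crossing probability per bin
        for j, v in enumerate(V):
            for k, p_k in enumerate(pk):
                v_new = gamma * v + k * w
                if v_new >= theta:
                    fire[j] += p_k         # spike: count the crossing, reset to zero
                    T[0, j] += p_k
                else:
                    T[np.searchsorted(V, v_new, side="right") - 1, j] += p_k

        p = np.full(len(V), 1.0 / len(V))
        for _ in range(2000):              # relax to the equilibrium density
            p = T @ p
        print("firing rate per step:", fire @ p)
        print("probability mass just below threshold:", p[-1])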